In this paper we look into the conjecture of Entezari et al. (2021) which states that if the permutation invariance of neural networks is taken into account, then there is likely no loss barrier to the linear interpolation between SGD solutions. First, we observe that neuron alignment methods alone are insufficient to establish low-barrier linear connectivity between SGD solutions due to a phenomenon we call variance collapse: interpolated deep networks suffer a collapse in the variance of their activations, causing poor performance. Next, we propose REPAIR (REnormalizing Permuted Activations for Interpolation Repair) which mitigates variance collapse by rescaling the preactivations of such interpolated networks. We explore the interaction between our method and the choice of normalization layer, network width, and depth, and demonstrate that using REPAIR on top of neuron alignment methods leads to 60%-100% relative barrier reduction across a wide variety of architecture families and tasks. In particular, we report a 74% barrier reduction for ResNet50 on ImageNet and 90% barrier reduction for ResNet18 on CIFAR10.
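As a rough illustration of the rescaling idea described above, the following PyTorch sketch interpolates two aligned networks' weights and then corrects one layer's preactivation statistics toward the interpolation of the endpoints' statistics. The layer-folding trick and all helper names are illustrative assumptions, not the authors' released implementation.

```python
import torch

def interpolate_state_dicts(sd_a, sd_b, alpha=0.5):
    """Plain linear interpolation of two (aligned) networks' parameters."""
    return {k: ((1 - alpha) * sd_a[k] + alpha * sd_b[k]) if sd_a[k].is_floating_point() else sd_a[k]
            for k in sd_a}

@torch.no_grad()
def channel_stats(model, layer, batch):
    """Per-channel mean/std of `layer`'s output (preactivations) on `batch`."""
    stats = {}
    def hook(_, __, out):
        flat = out.transpose(0, 1).flatten(1)      # (channels, everything else)
        stats["mean"], stats["std"] = flat.mean(1), flat.std(1)
    handle = layer.register_forward_hook(hook)
    model(batch)
    handle.remove()
    return stats["mean"], stats["std"]

@torch.no_grad()
def repair_layer(model, layer, goal_mean, goal_std, batch):
    """Fold an affine correction into `layer` so its output statistics match the goals."""
    cur_mean, cur_std = channel_stats(model, layer, batch)
    scale = goal_std / (cur_std + 1e-8)
    layer.weight.mul_(scale.view(-1, *([1] * (layer.weight.dim() - 1))))
    if layer.bias is not None:
        layer.bias.mul_(scale).add_(goal_mean - cur_mean * scale)

# Applied per layer (earliest to latest, recomputing statistics as you go), with the
# goal statistics taken as the alpha-interpolation of the two endpoints' statistics:
#   m_a, s_a = channel_stats(net_a, net_a.conv1, batch)
#   m_b, s_b = channel_stats(net_b, net_b.conv1, batch)
#   repair_layer(net_ab, net_ab.conv1, 0.5 * (m_a + m_b), 0.5 * (s_a + s_b), batch)
```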
We study the impact of different pruning techniques on the representations learned by deep neural networks trained with contrastive loss functions. Our work finds that at high sparsity levels, contrastive learning results in a higher number of misclassified examples relative to models trained with traditional cross-entropy loss. To understand this pronounced difference, we use metrics such as the number of PIEs (Hooker et al., 2019), Q-Score (Kalibhat et al., 2022), and PD-Score (Baldock et al., 2021) to measure the impact of pruning on the quality of the learned representation. Our analysis suggests that the schedule with which the pruning method is applied matters. We find that the negative impact of sparsity on the quality of the learned representation is highest when pruning is introduced early in the training phase.
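To make concrete what "the schedule with which the pruning method is applied" refers to, here is a generic gradual magnitude-pruning schedule in the style of Zhu & Gupta (2017); the paper's exact schedule is not given in this abstract, and `start_step` is simply the knob behind the "pruning introduced early" finding.

```python
def sparsity_at(step: int, start_step: int, end_step: int, final_sparsity: float) -> float:
    """Target sparsity at a given training step under a cubic gradual-pruning ramp."""
    if step < start_step:
        return 0.0
    if step >= end_step:
        return final_sparsity
    progress = (step - start_step) / (end_step - start_step)
    return final_sparsity * (1.0 - (1.0 - progress) ** 3)

# Introducing pruning early (small start_step) removes weights while the representation
# is still forming, which is the regime the analysis above identifies as most damaging.
```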
This paper studies the impact of static sparsity on the robustness of trained networks against perturbations, data corruptions, and adversarial examples. We show that, up to a certain sparsity level reached by increasing network width and depth while keeping the network capacity fixed, sparse networks consistently match and often outperform their initially dense counterparts. Robustness and accuracy decline simultaneously at very high sparsity due to the loose connectivity between network layers. Our findings show that the rapid drop in robustness under network compression reported in the literature is due to reduced network capacity rather than to sparsity.
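A small back-of-the-envelope sketch of the width/capacity trade-off mentioned above, under the assumption of a convolutional layer whose input and output channels both scale with the width factor (boundary layers scale differently); the numbers are purely illustrative.

```python
def sparsity_for_fixed_capacity(width_factor: float) -> float:
    """Sparsity needed so a layer widened by `width_factor` keeps its nonzero-weight count."""
    dense_growth = width_factor ** 2      # in- and out-channels both grow with width
    return 1.0 - 1.0 / dense_growth

for w in (1.0, 1.5, 2.0, 4.0):
    print(f"width x{w}: required sparsity = {sparsity_for_fixed_capacity(w):.2%}")
# e.g. doubling the width requires 75.00% sparsity, quadrupling it requires 93.75%
```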
Recently, pruning deep neural networks (DNNs) has received a lot of attention for improving accuracy and generalization power, reducing network size, and increasing inference speed on specialized hardware. Although pruning has mainly been tested on computer-vision tasks, its application in medical image analysis has barely been explored. This work investigates the impact of well-known pruning techniques, namely layer-wise and network-wide pruning, on the performance of nuclei instance segmentation in histological images. The instance segmentation model we utilize consists of two main branches: (1) a semantic segmentation branch, and (2) a deep regression branch. We study the impact of pruning on the performance of each branch separately, as well as on the final nuclei instance segmentation result. Evaluated on two publicly available datasets, our results show that layer-wise pruning performs slightly better than network-wide pruning at small compression ratios (CRs), while for large CRs network-wide pruning yields superior performance. For semantic segmentation, deep regression, and final instance segmentation, 93.75%, 95%, and 80% of the model weights can be pruned by layer-wise pruning with a 2% decrease in the performance of the respective models.
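The two pruning regimes being compared can be sketched with PyTorch's built-in pruning utilities; this is a generic illustration of layer-wise versus network-wide (global) magnitude pruning, not the paper's exact procedure.

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

def layerwise_prune(model: nn.Module, amount: float):
    """Remove the same fraction of smallest-magnitude weights inside every layer."""
    for module in model.modules():
        if isinstance(module, (nn.Conv2d, nn.Linear)):
            prune.l1_unstructured(module, name="weight", amount=amount)

def networkwide_prune(model: nn.Module, amount: float):
    """Apply one global magnitude threshold across all layers at once."""
    params = [(m, "weight") for m in model.modules()
              if isinstance(m, (nn.Conv2d, nn.Linear))]
    prune.global_unstructured(params, pruning_method=prune.L1Unstructured, amount=amount)

# e.g. a 16x compression ratio corresponds to amount = 1 - 1/16 = 0.9375,
# the 93.75% sparsity level quoted for the semantic segmentation branch above.
```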
In this paper, we conjecture that if the permutation invariance of neural networks is taken into account, SGD solutions will likely have no barrier in the linear interpolation between them. Although it is a bold conjecture, we show how extensive empirical attempts fall short of refuting it. We further provide preliminary theoretical results to support our conjecture. Our conjecture has implications for the lottery ticket hypothesis, distributed training, and ensemble methods.
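For concreteness, the quantity at stake in the conjecture, the loss barrier along the linear path between two solutions, can be estimated as in the sketch below; under the conjecture one would first apply the best neuron permutation to one of the networks, which this generic sketch does not attempt.

```python
import copy
import torch

@torch.no_grad()
def mean_loss(model, loader, loss_fn):
    total, n = 0.0, 0
    for x, y in loader:
        total += loss_fn(model(x), y).item() * len(y)
        n += len(y)
    return total / n

@torch.no_grad()
def linear_barrier(model_a, model_b, loader, loss_fn, steps=11):
    """Max excess of the interpolated loss over the linear interpolation of endpoint losses."""
    sd_a, sd_b = model_a.state_dict(), model_b.state_dict()
    end_a, end_b = mean_loss(model_a, loader, loss_fn), mean_loss(model_b, loader, loss_fn)
    barrier = 0.0
    for i in range(steps):
        alpha = i / (steps - 1)
        interp = copy.deepcopy(model_a)
        interp.load_state_dict({
            k: ((1 - alpha) * sd_a[k] + alpha * sd_b[k]) if sd_a[k].is_floating_point() else sd_a[k]
            for k in sd_a
        })
        mid = mean_loss(interp, loader, loss_fn)
        barrier = max(barrier, mid - ((1 - alpha) * end_a + alpha * end_b))
    return barrier   # close to 0 means the two solutions are linearly mode connected
```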
Recent advances in deep learning research, such as transformers, have bolstered the ability of automated agents to generate creative texts similar to those a human would write. By default, transformer decoders can only generate new text with respect to previously generated text. The output distribution of candidate tokens at any position is conditioned on previously selected tokens using a self-attention mechanism to emulate the property of autoregression. This is inherently limiting for tasks such as controllable story generation, where it may be necessary to condition on future plot events when writing a story. In this work, we propose Future Sight, a method for finetuning a pretrained generative transformer on the task of future conditioning. Transformer decoders are typically pretrained on the task of completing a context, one token at a time, by means of self-attention. Future Sight additionally enables a decoder to attend to an encoded future plot event. This motivates the decoder to expand on the context in a way that logically concludes with the provided future. During inference, the future plot event can be written by a human author to steer the narrative being generated in a certain direction. We evaluate the efficacy of our approach on a story generation task with human evaluators.
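A hedged sketch of the conditioning idea: a standard causal decoder block augmented with an extra cross-attention over an encoded future plot event. Module names and sizes are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class FutureConditionedBlock(nn.Module):
    """Causal decoder block with an extra cross-attention over an encoded future event."""
    def __init__(self, d_model=512, n_heads=8):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.future_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ff = nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                                nn.Linear(4 * d_model, d_model))
        self.ln1, self.ln2, self.ln3 = (nn.LayerNorm(d_model) for _ in range(3))

    def forward(self, x, future, causal_mask):
        q = self.ln1(x)                                   # story generated so far
        h, _ = self.self_attn(q, q, q, attn_mask=causal_mask)
        x = x + h
        q = self.ln2(x)                                   # every position can see the future event
        h, _ = self.future_attn(q, future, future)
        x = x + h
        return x + self.ff(self.ln3(x))

# causal_mask for a length-T prefix: torch.triu(torch.ones(T, T, dtype=torch.bool), diagonal=1)
```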
With few-shot learning, current large language models can perform reasonably well on complex tasks that require step-by-step reasoning. Are these models applying reasoning skills learnt during pre-training to reason outside of their training context, or are they simply memorizing their training corpus at a finer granularity and learning to better understand their context? To tease apart these possibilities, we introduce ALERT, a benchmark and suite of analyses for assessing language models' reasoning ability by comparing pre-trained and finetuned models on complex tasks that require reasoning skills to solve. ALERT provides a test bed for assessing any language model on fine-grained reasoning skills; it spans over 20 datasets and covers 10 different reasoning skills. We leverage ALERT to further investigate the role of finetuning. Through extensive empirical analysis, we find that language models acquire more reasoning skills, such as textual entailment, abductive reasoning, and analogical reasoning, during the finetuning stage than during pretraining. We also find that when language models are finetuned they tend to overfit to the prompt template, which hurts model robustness and causes generalization problems.
Large language models show improved downstream task performance when prompted to generate step-by-step reasoning to justify their final answers. These reasoning steps greatly improve model interpretability and verification, but objectively studying their correctness (independently of the final answer) is difficult without reliable methods for automatic evaluation. We simply do not know how often the stated reasoning steps actually support the final end-task predictions. In this work, we present ROSCOE, a suite of interpretable, unsupervised automatic scores that improve and extend previous text generation evaluation metrics. To evaluate ROSCOE against baseline metrics, we design a typology of reasoning errors and collect synthetic and human evaluation scores on commonly used reasoning datasets. In contrast with existing metrics, ROSCOE can measure semantic consistency, logicality, informativeness, fluency, and factuality, among other traits, by leveraging properties of step-by-step rationales. We empirically verify the strength of our metrics on five human-annotated and six programmatically perturbed diagnostic datasets covering a diverse set of tasks that require reasoning skills, and show that ROSCOE consistently outperforms baseline metrics.
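As one plausible instance of the kind of step-level score described above (a sketch assuming an off-the-shelf sentence encoder, not ROSCOE's exact definition), each reasoning step can be matched against its most similar source sentence and the similarities averaged.

```python
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")   # any sentence encoder would do

def alignment_score(source_sentences, reasoning_steps):
    """Mean best cosine similarity of each reasoning step against the source text."""
    src = encoder.encode(source_sentences, convert_to_tensor=True, normalize_embeddings=True)
    steps = encoder.encode(reasoning_steps, convert_to_tensor=True, normalize_embeddings=True)
    sims = util.cos_sim(steps, src)                  # (num_steps, num_source_sentences)
    return sims.max(dim=1).values.mean().item()      # in [-1, 1]; higher = better grounded
```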
Nowadays, copy detection patterns (CDP) appear to be a very promising anti-counterfeiting technology for physical object protection. However, the advent of deep learning as a powerful attacking tool has shown that general authentication schemes fail against such attacks. In this paper, we propose a new mathematical model of the printing-imaging channel for the authentication of CDP, together with a new detection scheme based on it. The results show that the proposed approach can reliably authenticate CDP even against copy fakes produced by deep learning that were unknown at the training stage, while using only digital references of the CDP during authentication.
Physics-Informed Neural Networks (PINNs) are gaining popularity as a method for solving differential equations. While more feasible in some contexts than classical numerical techniques, PINNs still lack credibility. A remedy can be found in Uncertainty Quantification (UQ), which is only beginning to emerge in the context of PINNs. Assessing how well the trained PINN complies with the imposed differential equation is the key to tackling uncertainty, yet there is a lack of a comprehensive methodology for this task. We propose a framework for UQ in Bayesian PINNs (B-PINNs) that incorporates the discrepancy between the B-PINN solution and the unknown true solution. We exploit recent results on error bounds for PINNs on linear dynamical systems and demonstrate the predictive uncertainty on a class of linear ODEs.
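To make the object of the uncertainty analysis concrete, the following sketch trains a toy PINN on a linear ODE du/dt = Au and exposes the residual that the error bounds above reason about; the system matrix, network size, and training loop are illustrative assumptions.

```python
import torch
import torch.nn as nn

A = torch.tensor([[0.0, 1.0], [-1.0, 0.0]])      # example linear system (harmonic oscillator)
u0 = torch.tensor([1.0, 0.0])                     # initial condition

net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 2))

def pinn_loss(t):
    t = t.clone().requires_grad_(True)
    u = net(t)                                                # (N, 2) candidate solution
    dudt = torch.stack(
        [torch.autograd.grad(u[:, i].sum(), t, create_graph=True)[0].squeeze(-1)
         for i in range(u.shape[1])], dim=1)                  # (N, 2) time derivative
    residual = dudt - u @ A.T                                 # deviation from du/dt = A u
    ic = net(torch.zeros(1, 1)).squeeze(0) - u0               # initial-condition mismatch
    return residual.pow(2).mean() + ic.pow(2).sum()

t_train = torch.linspace(0.0, 2 * torch.pi, 128).unsqueeze(-1)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(2000):
    opt.zero_grad()
    loss = pinn_loss(t_train)
    loss.backward()
    opt.step()
```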